
    Object Manipulation in Virtual Reality Under Increasing Levels of Translational Gain

    Room-scale Virtual Reality (VR) has become an affordable consumer reality, with applications ranging from entertainment to productivity. However, the limited physical space available for room-scale VR in the typical home or office environment poses a significant problem. To solve this, physical spaces can be extended by amplifying the mapping of physical to virtual movement (translational gain). Although amplified movement has been used since the earliest days of VR, little is known about how it influences reach-based interactions with virtual objects, now a standard feature of consumer VR. Consequently, this paper explores the picking and placing of virtual objects in VR for the first time, with translational gains of between 1x (a one-to-one mapping of a 3.5 m × 3.5 m virtual space to the same-sized physical space) and 3x (a 10.5 m × 10.5 m virtual space mapped to a 3.5 m × 3.5 m physical space). Results show that reaching accuracy is maintained for gains of up to 2x; going beyond this diminishes accuracy and increases simulator sickness and perceived workload. We suggest that gain levels of 1.5x to 1.75x can be used without compromising the usability of a VR task, significantly expanding the bounds of interactive room-scale VR.
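
    A minimal sketch of the translational gain idea described above, assuming gain is applied per frame to the horizontal (x, z) components of tracked displacement while vertical movement stays one-to-one; function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def apply_translational_gain(prev_physical, curr_physical, prev_virtual, gain=1.5):
    """Map a tracked physical displacement into virtual space.

    Horizontal (x, z) movement is amplified by `gain`; vertical (y) movement is
    kept one-to-one so that reaching height stays physically plausible (an
    assumption for this sketch). Positions are 3-vectors in metres.
    """
    delta = np.asarray(curr_physical, dtype=float) - np.asarray(prev_physical, dtype=float)
    scale = np.array([gain, 1.0, gain])          # amplify x and z only
    return np.asarray(prev_virtual, dtype=float) + delta * scale

# Example: a 1 m physical step forward becomes 1.5 m of virtual travel at 1.5x gain.
print(apply_translational_gain([0, 1.7, 0], [0, 1.7, 1.0], [5, 1.7, 5], gain=1.5))
```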

    Evolving Gaussian Process Kernels for Translation Editing Effort Estimation

    In many Natural Language Processing problems, the combination of machine learning and optimization techniques is essential. One of these problems is estimating the effort required to improve, under direct human supervision, a text that has been translated using a machine translation method. Recent developments in this area have shown that Gaussian Processes can be accurate for post-editing effort prediction. However, the Gaussian Process kernel has to be chosen in advance, and this choice influences the quality of the prediction. In this paper, we propose a Genetic Programming algorithm to evolve kernels for Gaussian Processes. We show that the combination of evolutionary optimization and Gaussian Processes removes the need for a priori specification of the kernel choice, and achieves predictions that, in many cases, outperform those obtained with fixed kernels.
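
    The evolved-kernel search itself is not reproduced here; the following is a minimal sketch of the fixed-kernel baseline the paper argues against, i.e. Gaussian Process regression for effort prediction with an a priori kernel choice, using scikit-learn on synthetic stand-in data. The feature and target definitions are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-in for sentence-level quality-estimation features (e.g. source length,
# language-model scores) and post-editing effort labels; a real experiment would
# use a quality-estimation corpus instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=200)

# A fixed kernel choice -- exactly the a priori decision that the paper's
# genetic-programming search over kernel expressions is meant to remove.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:150], y[:150])

mean, std = gp.predict(X[150:], return_std=True)
print("predictive mean of first test point:", mean[0], "+/-", std[0])
```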

    Evolutionary approaches to signal decomposition in an application service management system

    The increased demand for autonomous control in enterprise information systems has generated interest in efficient global search methods for multivariate datasets, in order to search for original elements in time-series patterns and to build causal models of system interactions, utilization dependencies, and performance characteristics. In this context, activity signal deconvolution is a necessary step to achieve effective adaptive control in Application Service Management. This paper investigates the potential of population-based metaheuristic algorithms, particularly variants of particle swarm optimization, genetic algorithms, and differential evolution methods, for activity signal deconvolution when the application performance model is unknown a priori. In our approach, the Application Service Management system is treated as a black- or grey-box, and activity signal deconvolution is formulated as a search problem that decomposes time series outlining the relations between action signals and the utilization and execution time of resources. Experiments are conducted using a queue-based computing system model as a test-bed under different load conditions and search configurations. Special attention was paid to high-dimensional scenarios, testing the effectiveness with which large-scale multivariate data analyses can obtain a near-optimal signal decomposition solution in a short time. The experimental results reveal the benefits, qualities, and drawbacks of the various metaheuristic strategies selected for a given signal deconvolution problem, and confirm the potential of evolutionary-type search to effectively explore the search space even in high-dimensional cases. The approach and the algorithms investigated can be useful in supporting human administrators, or in enhancing the effectiveness of feature extraction schemes that feed the decision blocks of autonomous controllers.
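
    A minimal sketch of posing signal deconvolution as a search problem and solving it with differential evolution, one of the metaheuristic families named above. It assumes the observed utilization trace is a non-negative weighted mix of known activity signals; the data and fitness function are illustrative rather than the paper's test-bed.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
T, K = 200, 3
activity = rng.random((K, T))                 # candidate activity (action) signals
true_w = np.array([0.6, 0.1, 1.4])
observed = true_w @ activity + rng.normal(scale=0.02, size=T)   # measured utilization

def fitness(w):
    # Reconstruction error of the candidate decomposition.
    return np.sum((w @ activity - observed) ** 2)

result = differential_evolution(fitness, bounds=[(0.0, 2.0)] * K, seed=1, tol=1e-8)
print("recovered mixing weights:", np.round(result.x, 3))
```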

    Signatures of arithmetic simplicity in metabolic network architecture

    Metabolic networks perform some of the most fundamental functions in living cells, including energy transduction and building block biosynthesis. While these are the best characterized networks in living systems, understanding their evolutionary history and complex wiring constitutes one of the most fascinating open questions in biology, intimately related to the enigma of life's origin itself. Is the evolution of metabolism subject to general principles, beyond the unpredictable accumulation of multiple historical accidents? Here we search for such principles by applying to an artificial chemical universe some of the methodologies developed for the study of genome-scale models of cellular metabolism. In particular, we use metabolic flux constraint-based models to exhaustively search for artificial chemistry pathways that can optimally perform an array of elementary metabolic functions. Despite the simplicity of the model employed, we find that the ensuing pathways display a surprisingly rich set of properties, including the existence of autocatalytic cycles and hierarchical modules, the appearance of universally preferable metabolites and reactions, and a logarithmic trend of pathway length as a function of input/output molecule size. Some of these properties can be derived analytically, borrowing methods previously used in cryptography. In addition, by mapping biochemical networks onto a simplified carbon atom reaction backbone, we find that several of the properties predicted by the artificial chemistry model hold for real metabolic networks. These findings suggest that optimality principles and arithmetic simplicity might lie beneath some aspects of biochemical complexity.
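
    A minimal sketch of a flux constraint-based optimization of the kind described above, written as a linear program: maximize an output flux subject to steady-state mass balance S·v = 0 and flux bounds. The stoichiometric matrix below is an invented toy network, not the paper's artificial chemistry.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions) for a 3-metabolite,
# 4-reaction network: uptake of A, A -> B, B -> C, secretion of C.
S = np.array([
    [ 1, -1,  0,  0],   # A
    [ 0,  1, -1,  0],   # B
    [ 0,  0,  1, -1],   # C
])

n_rxns = S.shape[1]
bounds = [(0, 10)] * n_rxns          # irreversible fluxes with an uptake cap
c = np.zeros(n_rxns)
c[-1] = -1.0                         # maximize secretion of C (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)
```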

    Predicting Quantitative Genetic Interactions by Means of Sequential Matrix Approximation

    Despite the emerging experimental techniques for perturbing multiple genes and measuring their quantitative phenotypic effects, genetic interactions have remained extremely difficult to predict on a large scale. Using a recent high-resolution screen of genetic interactions in yeast as a case study, we investigated whether the extraction of pertinent information encoded in the quantitative phenotypic measurements could be improved by computational means. By taking advantage of the observation that most gene pairs in the genetic interaction screens have no significant interactions with each other, we developed a sequential approximation procedure which ranks the mutation pairs in order of evidence for a genetic interaction. The sequential approximations can efficiently remove background variation in the double-mutation screens and give increasingly accurate estimates of the single-mutant fitness measurements. Interestingly, these estimates not only provide predictions for genetic interactions which are consistent with those obtained using the measured fitness, but they can even significantly improve the accuracy with which one can distinguish functionally related gene pairs from non-interacting pairs. The computational approach, in general, enables an efficient exploration and classification of genetic interactions in other studies and systems as well.
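
    The paper's sequential matrix approximation procedure is not reproduced here; the following is a minimal sketch of the underlying idea under the common multiplicative null model, where single-mutant fitnesses are recovered robustly from row medians of the log double-mutant matrix (exploiting the fact that most pairs do not interact) and interaction scores are the residuals. All details are assumptions for illustration.

```python
import numpy as np

def estimate_interactions(W):
    """Estimate single-mutant fitnesses from a double-mutant fitness matrix W
    (genes x genes) under the multiplicative null model W_ij ~ f_i * f_j,
    using row medians in log-space for robustness (valid when most pairs are
    non-interacting), then score interactions as eps_ij = W_ij - f_i * f_j."""
    logW = np.log(W)
    r = np.median(logW, axis=1)          # r_i ~ log f_i + median(log f)
    logf = r - np.median(r) / 2.0        # remove the shared offset
    f = np.exp(logf)
    eps = W - np.outer(f, f)
    return f, eps

rng = np.random.default_rng(2)
f_true = rng.uniform(0.5, 1.0, size=20)
W = np.outer(f_true, f_true) * np.exp(rng.normal(scale=0.01, size=(20, 20)))
W[3, 7] *= 0.6                      # plant one strong negative (synthetic-sick) interaction
f_hat, eps = estimate_interactions(W)
print("strongest negative interaction at:", np.unravel_index(np.argmin(eps), eps.shape))
```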

    Length of Stay After Childbirth in 92 Countries and Associated Factors in 30 Low- and Middle-Income Countries: Compilation of Reported Data and a Cross-sectional Analysis from Nationally Representative Surveys

    Background: Following childbirth, women need to stay sufficiently long in health facilities to receive adequate care. Little is known about length of stay following childbirth in low- and middle-income countries or its determinants. Methods and Findings: We described length of stay after facility delivery in 92 countries. We then created a conceptual framework of the main drivers of length of stay, and explored factors associated with length of stay in 30 countries using multivariable linear regression. Finally, we used multivariable logistic regression to examine the factors associated with stays that were “too short” (<24 h for vaginal deliveries and <72 h for cesarean-section deliveries). Across countries, the mean length of stay ranged from 1.3 to 6.6 d: 0.5 to 6.2 d for singleton vaginal deliveries and 2.5 to 9.3 d for cesarean-section deliveries. The percentage of women staying too short ranged from 0.2% to 83% for vaginal deliveries and from 1% to 75% for cesarean-section deliveries. Our conceptual framework identified three broad categories of factors that influenced length of stay: need-related determinants that required an indicated extension of stay, and health-system and woman/family dimensions that were drivers of inappropriately short or long stays. The factors identified as independently important in our regression analyses included cesarean-section delivery, birthweight, multiple birth, and infant survival status. Older women and women whose infants were delivered by doctors had extended lengths of stay, as did poorer women. Reliance on factors captured in secondary data that were self-reported by women up to 5 y after a live birth was the main limitation. Conclusions: Length of stay after childbirth varies greatly between countries. Substantial proportions of women stay too short a time to receive adequate postnatal care. We need to ensure that facilities have skilled birth attendants and effective elements of care, but also that women stay long enough to benefit from these. The challenge is to commit to achieving adequate lengths of stay in low- and middle-income countries, while ensuring any additional time is used to provide high-quality and respectful care.
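
    A minimal sketch of the kind of multivariable logistic regression described above for the binary “too short” outcome, using statsmodels on invented survey-style columns; the variable names, the per-delivery-mode thresholds, and the data are illustrative, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "cesarean": rng.integers(0, 2, n),
    "multiple_birth": rng.integers(0, 2, n),
    "maternal_age": rng.integers(15, 49, n),
    "wealth_quintile": rng.integers(1, 6, n),
    "length_of_stay_h": rng.gamma(shape=2.0, scale=18.0, size=n),
})
# "Too short" as defined in the abstract: <24 h vaginal, <72 h cesarean-section.
threshold = np.where(df["cesarean"] == 1, 72, 24)
df["too_short"] = (df["length_of_stay_h"] < threshold).astype(int)

model = smf.logit(
    "too_short ~ cesarean + multiple_birth + maternal_age + C(wealth_quintile)", data=df
).fit(disp=False)
print(model.summary().tables[1])
```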

    Multivariate modeling to identify patterns in clinical data: the example of chest pain

    Background: In chest pain, physicians are confronted with numerous interrelationships between symptoms and with evidence for or against classifying a patient into different diagnostic categories. The aim of our study was to find natural groups of patients on the basis of risk factors, history, and clinical examination data, which should then be validated against patients' final diagnoses. Methods: We conducted a cross-sectional diagnostic study in 74 primary care practices to establish the validity of symptoms and findings for the diagnosis of coronary heart disease. A total of 1199 patients above age 35 presenting with chest pain were included in the study. General practitioners took a standardized history and performed a physical examination. They also recorded their preliminary diagnoses, investigations, and management related to the patient's chest pain. We used multiple correspondence analysis (MCA) to examine associations at the variable level, and multidimensional scaling (MDS), k-means, and fuzzy cluster analyses to search for subgroups at the patient level. We further used heatmaps to graphically illustrate the results. Results: A multiple correspondence analysis supported our data collection strategy at the variable level. Six factors emerged from this analysis: “chest wall syndrome”, “vital threat”, “stomach and bowel pain”, “angina pectoris”, “chest infection syndrome”, and “self-limiting chest pain”. MDS, k-means, and fuzzy cluster analysis at the patient level were not able to find distinct groups. The resulting cluster solutions were not interpretable and had insufficient statistical quality criteria. Conclusions: Chest pain is a heterogeneous clinical category with no coherent associations between signs and symptoms at the patient level.
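
    A minimal sketch of the patient-level cluster search step, pairing k-means with a silhouette check over a range of cluster counts on standardized stand-in symptom indicators (scikit-learn); the columns are invented. A uniformly low silhouette profile across k is what the study's negative clustering result would look like in this form.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Stand-in for binary history/examination items (e.g. "pain reproducible by
# palpation", "known CHD"); real rows would be the 1199 chest-pain patients.
X = StandardScaler().fit_transform(rng.integers(0, 2, size=(1199, 20)))

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}")
# Low, flat silhouette values across k would indicate, as in the study,
# that no well-separated patient subgroups exist.
```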

    Bayesian inference for the information gain model

    One of the most popular paradigms to use for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., “If there is an A on one side of the card, there is always a 2 on the other side”). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article from www.springerlink.com.
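
    A minimal sketch of the maximum-likelihood side of such a fit: independent per-card selection counts modelled with a Bernoulli likelihood whose probabilities come from a small parametric function. The parametric form below is a placeholder standing in for a card-selection model, not Oaksford and Chater's actual information gain equations, and the counts are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Observed selection counts for the four Wason cards (p, not-p, q, not-q)
# out of n participants -- illustrative numbers, not the meta-analysis data.
counts = np.array([89, 16, 62, 25])
n = 100

def selection_probs(theta):
    """Placeholder two-parameter model mapping free parameters to four
    independent card-selection probabilities via a logistic link; it is NOT
    the information gain model's derivation from rarity assumptions."""
    a, b = theta
    return expit(np.array([a, a - 3 * b, a - b, a - 2 * b]))

def neg_log_lik(theta):
    p = selection_probs(theta)
    return -np.sum(counts * np.log(p) + (n - counts) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.5]), method="Nelder-Mead")
print("MLE parameters:", fit.x)
print("fitted selection rates:", np.round(selection_probs(fit.x), 2))
```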

    Natural Variation in Decision-Making Behavior in Drosophila melanogaster

    There has been considerable recent interest in using Drosophila melanogaster to investigate the molecular basis of decision-making behavior. Deciding where to place eggs is likely one of the most important decisions for a female fly, as eggs are vulnerable and larvae have limited motility. Here, we show that many natural genotypes of D. melanogaster prefer to lay eggs near nutritious substrate, rather than in nutritious substrate. These preferences are highly polymorphic in both degree and direction, with considerable heritability (0.488) and evolvability.
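
    A minimal sketch of how a broad-sense heritability of the kind quoted above (0.488) can be estimated from replicate preference measurements across natural genotypes, using one-way ANOVA variance components on invented data; the design and numbers are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(5)
n_genotypes, n_reps = 30, 10
genotype_effects = rng.normal(0.0, 1.0, size=n_genotypes)          # genetic effects
pref = genotype_effects[:, None] + rng.normal(0.0, 1.0, size=(n_genotypes, n_reps))

# One-way ANOVA variance components for a balanced design:
ms_among = n_reps * np.var(pref.mean(axis=1), ddof=1)   # mean square among genotypes
ms_within = np.mean(np.var(pref, axis=1, ddof=1))       # mean square within genotypes
var_genetic = (ms_among - ms_within) / n_reps
H2 = var_genetic / (var_genetic + ms_within)             # broad-sense heritability
print("broad-sense heritability H^2 ~", round(H2, 3))
```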